Words near each other
・ Neurocossus pinratanai
・ Neurocossus speideli
・ Neural engineering
・ Neural Engineering Object
・ Neural ensemble
・ Neural facilitation
・ Neural fibrolipoma
・ Neural fold
・ Neural gas
・ Neural groove
・ Neural Impulse Actuator
・ Neural Lab
・ Neural machine translation
・ Neural magazine
・ Neural mechanisms of mindfulness meditation
・ Neural modeling fields
・ Neural network (disambiguation)
・ Neural network software
・ Neural network synchronization protocol
・ Neural Networks (journal)
・ Neural oscillation
・ Neural pathway
・ Neural Plasticity (journal)
・ Neural plate
・ Neural processing for individual categories of objects
・ Neural Regeneration Research
・ Neural stem cell
・ Neural substrate
・ Neural substrate of locomotor central pattern generators in mammals
・ Neural therapy


Neural modeling fields : English Wikipedia
Neural modeling fields
Neural modeling field (NMF) is a mathematical framework for machine learning which combines ideas from neural networks, fuzzy logic, and model-based recognition. It has also been referred to as modeling fields, modeling fields theory (MFT), and maximum likelihood artificial neural networks (MLANS).
〔Perlovsky, L.I. (2001). ''Neural Networks and Intellect: Using Model-Based Concepts''. New York: Oxford University Press.〕
〔Perlovsky, L.I. (2006). Toward Physics of the Mind: Concepts, Emotions, Consciousness, and Symbols. ''Phys. Life Rev.'' 3(1), pp. 22-55.〕
〔Deming, R.W. (1998). Automatic buried mine detection using the maximum likelihood adaptive neural system (MLANS). In Proceedings of ''Intelligent Control (ISIC)'', held jointly with ''IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA)'' and ''Intelligent Systems and Semiotics (ISAS)''.〕
〔MDA Technology Applications Program web site.〕
〔Cangelosi, A.; Tikhanoff, V.; Fontanari, J.F.; Hourdakis, E. (2007). Integrating Language and Cognition: A Cognitive Robotics Approach. ''IEEE Computational Intelligence Magazine'', Vol. 2, Issue 3, Aug. 2007, pp. 65-70.〕
〔Carapezza, E.M. (ed.) (2004). ''Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense III'' (Proceedings Volume), 15 September 2004, ISBN 978-0-8194-5326-6. See chapter: ''Counter-terrorism threat prediction architecture''.〕
This framework was developed by Leonid Perlovsky at the AFRL. NMF is interpreted as a mathematical description of the mind's mechanisms, including concepts, emotions, instincts, imagination, thinking, and understanding. NMF is a multi-level, hetero-hierarchical system. At each level of NMF there are concept-models encapsulating the knowledge; they generate so-called top-down signals, which interact with the input, bottom-up signals. These interactions are governed by dynamic equations, which drive concept-model learning, adaptation, and the formation of new concept-models for better correspondence to the input, bottom-up signals.
==Concept models and similarity measures==
In the general case, the NMF system consists of multiple processing levels. At each level, output signals are the concepts recognized in (or formed from) the input, bottom-up signals. Input signals are associated with (or recognized as, or grouped into) concepts according to the models at this level. In the process of learning, the concept-models are adapted for better representation of the input signals, so that the similarity between the concept-models and signals increases. This increase in similarity can be interpreted as the satisfaction of an instinct for knowledge, and is felt as aesthetic emotion.
Each hierarchical level consists of N "neurons", enumerated by the index n = 1, 2, ..., N. These neurons receive input, bottom-up signals X(n) from lower levels in the processing hierarchy. X(n) is a field of bottom-up neuronal synaptic activations coming from neurons at a lower level. Each neuron has a number of synapses; for generality, each neuron's activation is described as a set of numbers,
: \vec X(n) = \{ X_d(n) \}, \quad d = 1, \ldots, D,
where D is the number of dimensions necessary to describe an individual neuron's activation.
Top-down, or priming, signals to these neurons are sent by concept-models Mm(Sm,n):
: \vec M_m(\vec S_m, n), \quad m = 1, \ldots, M,
where M is the number of models. Each model is characterized by its parameters Sm; in the neural structure of the brain they are encoded by the strengths of synaptic connections; mathematically, they are given by a set of numbers,
: \vec S_m = \{ S^a_m \}, \quad a = 1, \ldots, A,
where A is the number of dimensions necessary to describe an individual model.
Models represent signals in the following way. Suppose that signal X(''n'') is coming from sensory neurons n activated by object m, which is characterized by parameters Sm. These parameters may include the position, orientation, or lighting of object m. Model Mm(Sm,n) predicts the value X(n) of a signal at neuron n. For example, during visual perception, a neuron n in the visual cortex receives a signal X(n) from the retina and a priming signal Mm(Sm,n) from an object-concept-model ''m''. Neuron ''n'' is activated if both the bottom-up signal from the lower-level input and the top-down priming signal are strong. Various models compete for evidence in the bottom-up signals while adapting their parameters for a better match, as described below. This is a simplified description of perception: even the most benign everyday visual perception uses many levels, from the retina to object perception. The NMF premise is that the same laws describe the basic interaction dynamics at each level; perception of minute features, of everyday objects, and cognition of complex abstract concepts are all due to the same mechanism described below. Perception and cognition involve concept-models and learning: in perception, concept-models correspond to objects; in cognition, models correspond to relationships and situations.
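To make these definitions concrete, the following is a minimal numerical sketch in Python with NumPy. All sizes and the model form are illustrative assumptions rather than part of NMF itself: the bottom-up field {X(n)} is stored as an N×D array, the parameters {Sm} as an M×A array, and each toy concept-model predicts its own parameter vector as the expected signal, the simplest instantiation, under which learning reduces to Gaussian-mixture estimation.

 import numpy as np
 
 # Illustrative sizes only: N signals X(n) of dimension D, M models with A parameters each.
 N, D = 200, 2
 M, A = 3, 2   # here A = D: each model's parameters are its predicted mean signal
 
 rng = np.random.default_rng(0)
 X = rng.normal(size=(N, D))    # bottom-up signal field {X(n)}
 S = rng.normal(size=(M, A))    # concept-model parameters {S_m}
 
 def model_prediction(S_m, n):
     # Toy concept-model M_m(S_m, n): predicts the same expected signal S_m
     # for every neuron n. A vision model would instead compute the expected
     # activation from, e.g., object position and orientation encoded in S_m.
     return S_m
 
 print(model_prediction(S[0], n=0))   # expected signal at neuron 0 under model 0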
Learning is an essential part of perception and cognition, and in NMF theory it is driven by dynamics that increase a similarity measure between the sets of models and signals, L({X},{M}). The similarity measure is a function of model parameters and of the associations between the input, bottom-up signals and the top-down, concept-model signals. In constructing a mathematical description of the similarity measure, it is important to acknowledge two principles:
:''First'', the content of the visual field is unknown before perception occurs;
:''Second'', it may contain any of a number of objects, so important information could be contained in any bottom-up signal.
Therefore, the similarity measure is constructed so that it accounts for all bottom-up signals, X(n):
: L(\{X\}, \{M\}) = \prod_{n=1}^{N} l(X(n)).    (1)
This expression contains a product of partial similarities, l(X(n)), over all bottom-up signals; it therefore forces the NMF system to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the knowledge instinct is not satisfied); this is a reflection of the first principle. Second, before perception occurs, the mind does not know which object gave rise to a signal from a particular retinal neuron. Therefore a partial similarity measure is constructed so that it treats each model as an alternative (a sum over concept-models) for each input-neuron signal. Its constituent elements are conditional partial similarities between signal X(n) and model Mm, l(X(n)|m). This measure is “conditional” on object m being present; therefore, when combining these quantities into the overall similarity measure L, they are multiplied by r(m), which represents a probabilistic measure of object m actually being present. Combining these elements with the two principles noted above, a similarity measure is constructed as follows:
: L(\{X\}, \{M\}) = \prod_{n=1}^{N} \sum_{m=1}^{M} r(m)\, l(X(n)|m).    (2)
The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name “conditional partial similarity” for l(X(n)|m) (or simply l(n|m)) follows probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing the signals {X(n)} coming from objects described by the concept-models {Mm}. The coefficients r(m), called priors in probability theory, contain preliminary biases or expectations: expected objects m have relatively high r(m) values. Their true values are usually unknown and should be learned, like the other parameters Sm.
Note that in probability theory, a product of probabilities usually assumes that evidence is independent. The expression for L contains a product over n, but it does not assume independence among the various signals X(n). There is a dependence among signals due to concept-models: each model Mm(Sm,n) predicts expected signal values in many neurons n.
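As a worked illustration of equation (2), the following sketch (continuing the one above) computes ln L under the assumption that the conditional partial similarities l(X(n)|m) are isotropic Gaussian densities centered on the toy model predictions, with uniform priors r(m) = 1/M; both choices are illustrative, not prescribed by NMF. The sum over alternatives m is taken inside a log-sum-exp so that the product over n stays numerically stable.

 from scipy.special import logsumexp
 
 def log_similarity(X, S, r, sigma=1.0):
     # ln L of Eq. (2): L = prod_n sum_m r(m) * l(X(n)|m), with l(X(n)|m)
     # an isotropic Gaussian density of width sigma centered on the toy
     # model prediction M_m(S_m, n) = S_m (illustrative assumption).
     N, D = X.shape
     d2 = ((X[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)   # (N, M) squared distances
     log_l = -0.5 * d2 / sigma**2 - 0.5 * D * np.log(2 * np.pi * sigma**2)
     # sum over model alternatives m (inside the log), then over evidence n
     return logsumexp(log_l + np.log(r)[None, :], axis=1).sum()
 
 r = np.full(M, 1.0 / M)   # uninformative priors r(m); in NMF these are learned
 print(log_similarity(X, S, r))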
During the learning process, concept-models are constantly modified. Usually the functional forms of the models, Mm(Sm,n), are fixed, and learning-adaptation involves only the model parameters, Sm. From time to time the system forms a new concept while retaining an old one as well; alternatively, old concepts are sometimes merged or eliminated. This requires a modification of the similarity measure L; the reason is that more models always result in a better fit between the models and the data. This is a well-known problem; it is addressed by reducing the similarity L with a “skeptic penalty function” (see Penalty method), p(N,M), that grows with the number of models M, and this growth is steeper for a smaller amount of data N. For example, asymptotically unbiased maximum likelihood estimation leads to the multiplicative penalty p(N,M) = exp(-N_par/2), where N_par is the total number of adaptive parameters in all models (this penalty function is known as the Akaike information criterion; see (Perlovsky 2001) for further discussion and references).
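The skeptic penalty is conveniently applied in the log domain, since ln L + ln p(N,M) = ln L - N_par/2; the sketch below (notation as above) compares model sets of different sizes M on this penalized similarity. The initialization and the parameter count are simplistic placeholders for illustration.

 def penalized_log_similarity(X, S, r):
     # ln L + ln p(N,M) with the AIC-style penalty p(N,M) = exp(-N_par/2);
     # N_par counts the adaptive parameters: M mean vectors plus M-1 free priors.
     M_models, A = S.shape
     N_par = M_models * A + (M_models - 1)
     return log_similarity(X, S, r) - 0.5 * N_par
 
 # More models always raise ln L itself; the penalty makes different M comparable.
 for M_try in (2, 3, 5):
     idx = np.random.default_rng(1).choice(len(X), M_try, replace=False)
     S_try, r_try = X[idx], np.full(M_try, 1.0 / M_try)   # crude initialization
     print(M_try, penalized_log_similarity(X, S_try, r_try))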

Excerpt source: the free encyclopedia Wikipedia
Read the full text of "Neural modeling fields" on Wikipedia


